Context Sequence Model of Speech Production Enriched with Articulatory Features
Authors
Abstract
This study describes the integration of an articulatory factor into the exemplar-based Context Sequence Model of speech production (CSM) [10], which builds on the concept of a speech perception-production loop. It has been shown that the selection of new exemplars for speech production is based on roughly 0.5 s of preceding acoustic context and on the linguistic match of the exemplars' following context. This investigation presents the role of articulatory features integrated into the exemplar weighting process.
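To make the weighting idea concrete, the sketch below scores a stored exemplar by combining the acoustic match of the preceding ~0.5 s of context, the linguistic match of the following context, and an additional articulatory term. The function and field names, the simple distance measures, and the weights are illustrative assumptions, not the published CSM implementation.

```python
import numpy as np

def exemplar_weight(candidate, target, w_acoustic=1.0, w_artic=1.0):
    """Score one stored exemplar against the current production target.

    `candidate` and `target` are assumed to be dicts holding an acoustic
    feature matrix for the preceding ~0.5 s of context ("acoustic"), a label
    sequence for the following context ("labels"), and an articulatory
    feature matrix such as EMA trajectories ("artic"), all of matching shape.
    """
    # Acoustic match of the preceding context (higher = more similar).
    acoustic_sim = -np.mean((candidate["acoustic"] - target["acoustic"]) ** 2)
    # Linguistic match of the following context (share of identical labels).
    linguistic_sim = float(np.mean(
        [a == b for a, b in zip(candidate["labels"], target["labels"])]
    ))
    # Additional articulatory similarity term (the factor studied here).
    artic_sim = -np.mean((candidate["artic"] - target["artic"]) ** 2)
    return w_acoustic * acoustic_sim + linguistic_sim + w_artic * artic_sim

def select_exemplar(candidates, target):
    """Retrieve the stored exemplar with the highest combined weight."""
    return max(candidates, key=lambda c: exemplar_weight(c, target))
```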
Similar Resources
Session 2aSC: Linking Perception and Production (Poster Session) 2aSC47. Acoustic and articulatory information as joint factors coexisting in the context sequence model of speech production
This simulation study presents the integration of an articulatory factor into the Context Sequence Model (CSM) (Wade et al., 2010) of speech production using Polish sonorant data measured with the Electromagnetic Articulograph technology (EMA) (Mücke et al., 2010). Based on exemplar-theoretic assumptions (Pierrehumbert 2001), the CSM models the speech production-perception loop operating on a s...
Modeling Multi-modal Factors in Speech Production with the Context Sequence Model
This article describes modeling speech production with multi-modal factors integrated into the Context Sequence Model (Wade et al. 2010). It is posited that articulatory information can be successfully incorporated and stored in parallel to the acoustic information in a speech production model. Results demonstrate that a memory sensitive to rich context and enlarged by the additional inputs fac...
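As an illustration of what storing articulatory information in parallel to the acoustic information might look like, the sketch below keeps both channels in one exemplar record; the field names and types are assumptions made for the example, not the paper's actual data format.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class MultiModalExemplar:
    """One stored speech event with its channels kept in parallel."""
    label: str                # linguistic label, e.g. segment or word
    acoustic: np.ndarray      # acoustic features over time (e.g. spectra)
    articulatory: np.ndarray  # articulatory features over time (e.g. EMA)

# The exemplar memory is then a time-ordered list of such records, so that
# context can be matched on either channel, or on both jointly.
memory: list[MultiModalExemplar] = []
```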
Better HMM-Based Articulatory Feature Extraction with Context-Dependent Model
The majority of speech recognition systems today use Hidden Markov Models (HMMs) as acoustic models, since they can be effectively trained to map a speech utterance onto a sequence of units. Such systems perform even better if the units are context-dependent. Analogously, when HMM techniques are applied to the problem of articulatory feature extraction, context-dependent articulato...
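Purely to illustrate what "context-dependent" units mean in this setting, the toy sketch below builds triphone-style unit names and looks up an articulatory feature for the centre phone; the phone inventory and feature table are invented for the example, and real systems train HMMs over such units rather than use a lookup table.

```python
# Toy illustration only; the phone set and feature table are assumptions.
PLACE_OF_ARTICULATION = {"p": "labial", "t": "alveolar", "k": "velar"}

def context_dependent_unit(left: str, center: str, right: str) -> str:
    """Name a context-dependent (triphone-style) unit as left-center+right."""
    return f"{left}-{center}+{right}"

def center_place(unit: str) -> str:
    """Look up the place-of-articulation feature of a unit's center phone."""
    center = unit.split("-", 1)[1].split("+", 1)[0]
    return PLACE_OF_ARTICULATION.get(center, "unknown")

print(center_place(context_dependent_unit("a", "t", "a")))  # -> alveolar
```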
Articulatory Features for Robust Visual Speech Recognition by Ekaterina Saenko
This thesis explores a novel approach to visual speech modeling. Visual speech, or a sequence of images of the speaker's face, is traditionally viewed as a single stream of contiguous units, each corresponding to a phonetic segment. These units are defined heuristically by mapping several visually similar phonemes to one visual phoneme, sometimes referred to as a viseme. However, experimental e...
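The many-to-one phoneme-to-viseme mapping mentioned above can be pictured as a small lookup table; the particular groupings below are a common heuristic choice and only an assumption here, since viseme inventories differ between systems.

```python
# Illustrative many-to-one phoneme-to-viseme grouping (assumed classes).
PHONEME_TO_VISEME = {
    "p": "bilabial", "b": "bilabial", "m": "bilabial",
    "f": "labiodental", "v": "labiodental",
    "t": "alveolar", "d": "alveolar", "s": "alveolar", "z": "alveolar",
}

def to_visemes(phonemes):
    """Collapse a phoneme sequence into the coarser viseme stream that a
    traditional visual speech model operates on."""
    return [PHONEME_TO_VISEME.get(p, "other") for p in phonemes]

print(to_visemes(["b", "a", "t"]))  # -> ['bilabial', 'other', 'alveolar']
```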
Backing-off Context- & Gender-dependent Models for Better Articulatory Feature Extraction
The majority of speech recognition systems today use Hidden Markov Models (HMMs) as acoustic models, since they can be effectively trained to map a speech utterance onto a sequence of units. Such systems perform even better if the units employed are both context-dependent and gender-dependent. Analogously, when HMM technology is applied to the problem of articulatory feature extraction...